
    IOT Security Against Network Anomalies through Ensemble of Classifiers Approach

    The use of IoT networks to monitor critical environments of all types has expanded greatly in recent years, and with it the volume of data transferred. Since so many devices are connected to the Internet of Things (IoT), network and device security is of paramount importance. Network dynamics and complexity remain the biggest challenges in detecting IoT attacks, and the dynamic nature of the network makes it difficult to categorise attacks with a single classifier. To identify such anomalies, this study therefore proposes an ensemble classifier that combines the independent classifiers ELM, Naïve Bayes (NB), and k-nearest neighbour (KNN) in bagging and boosting configurations. The proposed technique is evaluated and compared using MQTTset, a dataset focused on the MQTT protocol, which is frequently utilised in IoT networks. The analysis demonstrates that the proposed classifier outperforms the baseline classifiers in terms of classification accuracy, precision, recall, and F-score.
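
    The ensemble just described can be approximated with off-the-shelf components. The following is a minimal sketch of the bagging-plus-voting idea only, not the authors' implementation: it bags k-nearest-neighbour and Naïve Bayes base learners and combines them by soft voting, omits the ELM member (scikit-learn provides none), and uses synthetic data as a stand-in for features extracted from MQTTset.

        # Illustrative sketch: bagged KNN and Naive Bayes base learners combined
        # by soft voting; synthetic data stands in for MQTTset features, and the
        # paper's ELM base learner is omitted.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier, VotingClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        # Placeholder for MQTTset traffic features and attack/normal labels.
        X, y = make_classification(n_samples=2000, n_features=20, n_informative=10)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)

        # Bag each base learner to reduce variance, then combine by soft voting.
        bagged_knn = BaggingClassifier(KNeighborsClassifier(n_neighbors=5), n_estimators=10)
        bagged_nb = BaggingClassifier(GaussianNB(), n_estimators=10)
        ensemble = VotingClassifier([("knn", bagged_knn), ("nb", bagged_nb)], voting="soft")

        ensemble.fit(X_tr, y_tr)
        print(classification_report(y_te, ensemble.predict(X_te)))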

    Distributed Apportioning in a Power Network for providing Demand Response Services

    Greater penetration of Distributed Energy Resources (DERs) in power networks requires coordination strategies that allow for self-adjustment of contributions in a network of DERs, owing to variability in generation and demand. In this article, a distributed scheme is proposed that enables a DER in a network to arrive at viable power reference commands that satisfy its local constraints on generation and the loads it has to service, while the aggregated behavior of the multiple DERs in the network and their respective loads meets the ancillary services demanded by the grid. The net-load management system for a single unit is referred to as the Local Inverter System (LIS) in this article. A distinguishing feature of the proposed consensus-based solution is the distributed finite-time termination of the algorithm, which allows each LIS unit in the network to determine power reference commands in a distributed manner in the presence of communication delays. The proposed scheme allows prioritization of Renewable Energy Sources (RES) in the network and also enables auto-adjustment of contributions from LIS units with lower-priority (non-RES) resources. The methods are validated using hardware-in-the-loop simulations with Raspberry Pi devices as distributed control units, which implement the proposed distributed algorithm and are responsible for determining and dispatching real-time power reference commands to simulated power electronics interfaces emulating LIS units for demand response. Comment: 7 pages, 11 figures, IEEE International Conference on Smart Grid Communication
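
    A simplified version of such consensus-based apportioning can be sketched as follows. This is not the paper's algorithm (finite-time termination, delay handling and RES prioritization are omitted), and it assumes each unit knows the total grid demand and the network size; the nodes run plain average consensus on a local numerator and denominator, recover a common utilization ratio, and set their own power reference within their limits.

        # Simplified consensus-based apportioning sketch (not the paper's
        # algorithm): nodes average a local numerator/denominator with their
        # neighbours, recover a shared utilization ratio, and pick a power
        # reference inside [p_min, p_max]. Assumes every unit knows the total
        # demand and the number of units; termination/delays/RES priority omitted.
        import numpy as np

        # Hypothetical 4-node ring network with per-unit generation limits (kW).
        adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        p_min = np.array([0.0, 1.0, 0.5, 0.0])
        p_max = np.array([5.0, 3.0, 4.0, 6.0])
        demand = 10.0                                # total power requested by the grid

        n = len(p_min)
        num = demand / n - p_min                     # local share of (demand - sum of p_min)
        den = p_max - p_min                          # local flexible capacity

        for _ in range(200):                         # synchronous average consensus
            new_num, new_den = num.copy(), den.copy()
            for i, nbrs in adjacency.items():
                for j in nbrs:
                    w = 1.0 / (max(len(adjacency[i]), len(adjacency[j])) + 1)  # Metropolis weight
                    new_num[i] += w * (num[j] - num[i])
                    new_den[i] += w * (den[j] - den[i])
            num, den = new_num, new_den

        alpha = np.clip(num / den, 0.0, 1.0)         # each unit's estimate of the common ratio
        p_ref = p_min + alpha * (p_max - p_min)      # local power reference command
        print(p_ref, p_ref.sum())                    # the sum approaches the demanded 10 kW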

    Wettability of Oxide Thin Films Prepared by Pulsed Laser Deposition: New Insights

    Master's thesis (Master of Engineering)

    Minimal Intubating Dose of Succinylcholine: A Comparative Study of 0.4, 0.5 and 0.6 mg/kg Dose

    Muscle relaxants are an integral part of modern balanced anesthesia, and succinylcholine, a depolarizing drug, remains in use despite its adverse effects. Its excellent intubating conditions, fastest onset, and shortest duration of action make it an excellent choice for anesthesiologists. The conventional dose of 1.5-2 mg/kg is commonly used to obtain relaxation for intubation. This study was conducted with much smaller doses of succinylcholine (0.4, 0.5 and 0.6 mg/kg) to evaluate an acceptable intubating dose at 60 seconds that was unlikely to have any untoward/side effects.

    Federated Classification in Hyperbolic Spaces via Secure Aggregation of Convex Hulls

    Hierarchical and tree-like data sets arise in many applications, including language processing, graph data mining, phylogeny and genomics. It is known that tree-like data cannot be embedded into Euclidean spaces of finite dimension with small distortion. This problem can be mitigated through the use of hyperbolic spaces. When such data also has to be processed in a distributed and privatized setting, it becomes necessary to work with new federated learning methods tailored to hyperbolic spaces. As an initial step towards the development of the field of federated learning in hyperbolic spaces, we propose the first known approach to federated classification in hyperbolic spaces. Our contributions are as follows. First, we develop distributed versions of convex SVM classifiers for Poincaré discs. In this setting, the information conveyed from clients to the global classifier consists of the convex hulls of the clusters present in individual client data. Second, to avoid label-switching issues, we introduce a number-theoretic approach for label recovery based on the so-called integer B_h sequences. Third, we compute the complexity of the convex hulls in hyperbolic spaces to assess the extent of data leakage; at the same time, in order to limit the communication cost for the hulls, we propose a new quantization method for the Poincaré disc coupled with Reed-Solomon-like encoding. Fourth, at the server level, we introduce a new approach for aggregating the clients' convex hulls based on balanced graph partitioning. We test our method on a collection of diverse data sets, including hierarchical single-cell RNA-seq data from different patients distributed across different repositories with stringent privacy constraints. The classification accuracy of our method is up to ~11% better than its Euclidean counterpart, demonstrating the importance of privacy-preserving learning in hyperbolic spaces.
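
    The core privacy idea, sharing convex hulls of client clusters instead of raw points, can be illustrated with a short sketch. Note the heavy simplifications: Euclidean convex hulls and a Euclidean linear SVM stand in for the paper's Poincaré-disc hulls and hyperbolic SVM, and the quantization, Reed-Solomon-like encoding, B_h label recovery and graph-partitioning aggregation are all omitted.

        # Simplified sketch of "share convex hulls, not raw data" federated
        # classification. Euclidean hulls and a Euclidean SVM are stand-ins for
        # the paper's hyperbolic (Poincare-disc) machinery.
        import numpy as np
        from scipy.spatial import ConvexHull
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def client_payload(points, label):
            """A client uploads only the hull vertices of its class cluster."""
            hull = ConvexHull(points)                # Euclidean stand-in for a hyperbolic hull
            vertices = points[hull.vertices]
            return vertices, np.full(len(vertices), label)

        # Two hypothetical clients, each holding one class cluster.
        client_a = 0.3 * rng.standard_normal((200, 2)) + [0.5, 0.0]
        client_b = 0.3 * rng.standard_normal((200, 2)) - [0.5, 0.0]
        Xa, ya = client_payload(client_a, 0)
        Xb, yb = client_payload(client_b, 1)

        # The server aggregates the hulls and fits a global classifier on far fewer points.
        X_server, y_server = np.vstack([Xa, Xb]), np.concatenate([ya, yb])
        global_clf = SVC(kernel="linear").fit(X_server, y_server)
        print(len(X_server), "hull points shared instead of", len(client_a) + len(client_b))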

    Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge

    Edge devices can benefit remarkably from federated learning due to their distributed nature; however, their limited resources and computing power pose limitations in deployment. A possible solution to this problem is to utilize off-the-shelf sparse learning algorithms at the clients to meet their resource budget. However, such naive deployment in the clients causes significant accuracy degradation, especially for highly resource-constrained clients. In particular, our investigations reveal that the lack of consensus in the sparsity masks among the clients may potentially slow down the convergence of the global model and cause a substantial accuracy drop. With these observations, we present federated lottery aware sparsity hunting (FLASH), a unified sparse learning framework for training a sparse sub-model that maintains the performance under ultra-low parameter density while yielding proportional communication benefits. Moreover, given that different clients may have different resource budgets, we present hetero-FLASH, where clients can take different density budgets based on their device resource limitations instead of supporting only one target parameter density. Experimental analysis on diverse models and datasets shows the superiority of FLASH in closing the gap with an unpruned baseline while yielding up to ~10.1% improved accuracy with ~10.26x less communication compared to existing alternatives at similar hyperparameter settings. Code is available at https://github.com/SaraBabakN/flash_fl. Comment: Accepted in TMLR, https://openreview.net/forum?id=iHyhdpsny
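
    The observation that clients should agree on a single sparsity mask can be illustrated with a toy round of sparse federated averaging. This is only a sketch of that mask-consensus idea, not FLASH itself: the "local training" step is a random placeholder gradient, and the server simply broadcasts one magnitude-based mask at the target density.

        # Toy sketch of mask-consensus sparse federated averaging (an
        # illustration of sharing one sparsity mask, not FLASH itself).
        import numpy as np

        rng = np.random.default_rng(0)
        density = 0.1                                 # target fraction of kept weights
        w_global = rng.standard_normal(1000)          # flattened global model weights

        def top_k_mask(values, density):
            """Keep the largest-magnitude entries, zero out the rest."""
            k = int(density * values.size)
            thresh = np.sort(np.abs(values))[-k]
            return (np.abs(values) >= thresh).astype(float)

        # The server fixes a single mask so every client prunes the same coordinates.
        mask = top_k_mask(w_global, density)

        def client_update(w, mask):
            """Stand-in for local training: one hypothetical masked gradient step."""
            grad = rng.standard_normal(w.size)        # placeholder for a real gradient
            return (w - 0.01 * grad) * mask           # only unmasked weights ever change

        # One federated round: average sparse updates from three clients.
        updates = [client_update(w_global * mask, mask) for _ in range(3)]
        w_global = np.mean(updates, axis=0)
        print("nonzero weights:", int((w_global != 0).sum()), "of", w_global.size)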

    Byzantine-Resilient Federated Learning with Heterogeneous Data Distribution

    For mitigating Byzantine behaviors in federated learning (FL), most state-of-the-art approaches, such as Bulyan, tend to leverage the similarity of updates from the benign clients. However, in many practical FL scenarios, data is non-IID across clients, so the updates received even from the benign clients are quite dissimilar. Hence, using similarity-based methods results in wasted opportunities to train a model from interesting non-IID data, and also in slower model convergence. We propose DiverseFL to overcome this challenge in heterogeneous data distribution settings. Rather than comparing each client's update with other client updates to detect Byzantine clients, DiverseFL compares each client's update with a guiding update for that client. Any client whose update diverges from its associated guiding update is then tagged as a Byzantine node. The FL server in DiverseFL computes the guiding update in every round for each client over a small sample of the client's local data that is received only once before the start of training. However, sharing even a small sample of a client's data with the FL server can compromise the client's data privacy needs. To tackle this challenge, DiverseFL creates a Trusted Execution Environment (TEE)-based enclave to receive each client's sample and to compute its guiding updates. The TEE provides hardware-assisted verification and attestation to each client that its data is not leaked outside the TEE. Through experiments involving neural networks, benchmark datasets and popular Byzantine attacks, we demonstrate that DiverseFL not only performs Byzantine mitigation quite effectively but also almost matches the performance of OracleSGD, where the server aggregates only the updates from the benign clients.
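
    The per-client guiding-update check lends itself to a compact sketch. The code below is only an illustration of that screening rule, not DiverseFL itself: the TEE enclave and real model training are omitted, the updates are synthetic vectors, and the cosine-similarity threshold is a hypothetical choice.

        # Sketch of guiding-update screening: a client is flagged as Byzantine
        # when its update diverges from the guiding update computed on its own
        # small data sample. TEE handling and real training are omitted.
        import numpy as np

        rng = np.random.default_rng(1)

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        dim, threshold = 50, 0.0
        true_direction = rng.standard_normal(dim)

        # Guiding updates: what the server computes from each client's small sample.
        guiding = {c: true_direction + 0.3 * rng.standard_normal(dim) for c in range(4)}

        # Client updates: clients 0-2 are benign (non-IID noise), client 3 flips signs.
        updates = {c: true_direction + 0.5 * rng.standard_normal(dim) for c in range(3)}
        updates[3] = -true_direction + 0.5 * rng.standard_normal(dim)

        # Accept a client only if its update agrees with its own guiding update.
        benign = [c for c in updates if cosine(updates[c], guiding[c]) > threshold]
        aggregate = np.mean([updates[c] for c in benign], axis=0)
        print("accepted clients:", benign)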